Imitating Driver Behavior with Generative Adversarial Networks
The ability to accurately predict and simulate human driving behavior is
critical for the development of intelligent transportation systems. Traditional
modeling methods have employed simple parametric models and behavioral cloning.
This paper adopts a method for overcoming the problem of cascading errors
inherent in prior approaches, resulting in realistic behavior that is robust to
trajectory perturbations. We extend Generative Adversarial Imitation Learning
to the training of recurrent policies, and we demonstrate that our model
outperforms rule-based controllers and maximum likelihood models in realistic
highway simulations. Our model reproduces emergent behavior of human
drivers, such as lane change rate, while maintaining realistic control over
long time horizons.
Comment: 8 pages, 6 figures
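The adversarial setup behind GAIL can be sketched in a toy form: a discriminator learns to separate expert actions from policy actions, and the policy treats the discriminator's score as a surrogate reward. The 1-D action distributions, the single-parameter policy, and the step sizes below are illustrative assumptions, not the paper's recurrent implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy setup: expert "actions" cluster around 1.0, the untrained policy around 0.0.
expert_actions = rng.normal(1.0, 0.1, size=500)
policy_mean = 0.0            # the single policy parameter we adapt
disc_w, disc_b = 0.0, 0.0    # logistic discriminator D(a) = sigmoid(w*a + b)
lr_d, lr_p, sigma = 0.05, 0.01, 0.1

for step in range(2000):
    policy_actions = rng.normal(policy_mean, sigma, size=500)

    # Discriminator ascent on its log-likelihood: expert labeled 1, policy 0.
    for actions, label in ((expert_actions, 1.0), (policy_actions, 0.0)):
        p = sigmoid(disc_w * actions + disc_b)
        disc_w += lr_d * np.mean((label - p) * actions)
        disc_b += lr_d * np.mean(label - p)
    disc_w = float(np.clip(disc_w, -1.0, 1.0))  # keep the discriminator weak for stability

    # Policy step: REINFORCE on the surrogate reward r(a) = log D(a), which
    # pushes the policy toward actions the discriminator scores as expert-like.
    p = sigmoid(disc_w * policy_actions + disc_b)
    advantage = np.log(p + 1e-8) - np.log(p + 1e-8).mean()
    policy_mean += lr_p * np.mean(advantage * (policy_actions - policy_mean) / sigma**2)
```

After training, `policy_mean` drifts toward the expert's 1.0. Because the policy is trained against a critic of its own rollouts rather than against single-step supervised targets, small errors are corrected instead of compounding, which is the intuition behind the robustness to cascading errors claimed above.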
Modeling Human Driving Behavior through Generative Adversarial Imitation Learning
Imitation learning is an approach for generating intelligent behavior when
the cost function is unknown or difficult to specify. Building upon work in
inverse reinforcement learning (IRL), Generative Adversarial Imitation Learning
(GAIL) aims to provide effective imitation even for problems with large or
continuous state and action spaces. Driver modeling is one example of a problem
where the state and action spaces are continuous. Human driving behavior is
characterized by non-linearity and stochasticity, and the underlying cost
function is unknown. As a result, learning from human driving demonstrations is
a promising approach for generating human-like driving behavior. This article
describes the use of GAIL for learning-based driver modeling. Because driver
modeling is inherently a multi-agent problem, where the interaction between
agents needs to be modeled, this paper describes a parameter-sharing extension
of GAIL called PS-GAIL to tackle multi-agent driver modeling. In addition, GAIL
is domain agnostic, making it difficult to encode specific knowledge relevant
to driving in the learning process. This paper describes Reward Augmented
Imitation Learning (RAIL), which modifies the reward signal to provide
domain-specific knowledge to the agent. Finally, human demonstrations are
dependent upon latent factors that may not be captured by GAIL. This paper
describes Burn-InfoGAIL, which allows for disentanglement of latent variability
in demonstrations. Imitation learning experiments are performed using NGSIM, a
real-world highway driving dataset. Experiments show that these modifications
to GAIL can successfully model highway driving behavior, accurately replicating
human demonstrations and generating realistic, emergent behavior in the traffic
flow arising from the interaction between driving agents.
Comment: 28 pages, 8 figures. arXiv admin note: text overlap with arXiv:1803.0104
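Of the three extensions above, RAIL is the simplest to illustrate: the learned imitation reward is augmented with hand-specified penalties for events that are clearly undesirable in driving. The event flags and penalty magnitudes below are illustrative placeholders, not the paper's exact formulation:

```python
def augmented_reward(imitation_reward, off_road=False, collided=False,
                     hard_brake=False):
    """RAIL-style reward augmentation: subtract fixed penalties for
    undesirable driving events on top of the learned imitation reward.
    Penalty magnitudes here are placeholders, not the paper's values."""
    penalties = {
        "off_road": (off_road, 2.0),
        "collided": (collided, 4.0),
        "hard_brake": (hard_brake, 1.0),
    }
    reward = imitation_reward
    for happened, penalty in penalties.values():
        if happened:
            reward -= penalty
    return reward
```

For example, `augmented_reward(1.0, collided=True)` evaluates to `-3.0`, so the agent receives domain knowledge ("do not collide") that the domain-agnostic GAIL objective cannot encode on its own.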
The Waymo Open Sim Agents Challenge
Simulation with realistic, interactive agents represents a key task for
autonomous vehicle software development. In this work, we introduce the Waymo
Open Sim Agents Challenge (WOSAC). WOSAC is the first public challenge to
tackle this task and propose corresponding metrics. The goal of the challenge
is to stimulate the design of realistic simulators that can be used to evaluate
and train a behavior model for autonomous driving. We outline our evaluation
methodology, present results for a number of different baseline simulation
agent methods, and analyze several submissions to the 2023 competition which
ran from March 16, 2023 to May 23, 2023. The WOSAC evaluation server remains
open for submissions, and we discuss open problems for the task.
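The core idea behind this style of evaluation is distributional realism: sim agents are scored on how much likelihood their simulated behavior assigns to logged human behavior, rather than on exact trajectory reproduction. A rough sketch of that idea (illustrative only, not WOSAC's actual metric definition) scores rollouts by a histogram-based negative log-likelihood over one behavior statistic:

```python
import numpy as np

def nll_realism(logged, simulated, bins=10):
    """Toy distribution-matching score: average negative log-likelihood of
    logged values under a histogram density fitted to simulated rollouts.
    Lower is better. (Illustrative sketch, not the WOSAC metric.)"""
    lo = min(logged.min(), simulated.min())
    hi = max(logged.max(), simulated.max())
    hist, edges = np.histogram(simulated, bins=bins, range=(lo, hi), density=True)
    idx = np.clip(np.digitize(logged, edges) - 1, 0, bins - 1)
    p = np.maximum(hist[idx], 1e-6)  # floor so empty bins stay finite
    return -np.log(p).mean()

rng = np.random.default_rng(0)
logged = rng.normal(10.0, 1.0, 1000)  # e.g. logged speeds (m/s)
good = rng.normal(10.0, 1.0, 1000)    # sim agents matching the logged distribution
bad = rng.normal(14.0, 1.0, 1000)     # systematically too-fast sim agents
```

Here `nll_realism(logged, good)` comes out lower than `nll_realism(logged, bad)`, ranking the well-matched simulator above the biased one, which is the kind of ordering a realism benchmark needs to produce.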